286 research outputs found

    Can Clouds Replace Grids? A Real-Life Exabyte-Scale Test-Case

    The world's largest scientific machine, comprising dual 27 km circular proton accelerators cooled to 1.9 K and located some 100 m underground, currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation, the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability, as seen by the experiments (as opposed to that measured by the official tools), still needs to be significantly improved. Prolonged downtimes or degradations of major services, or even complete sites, are still too common, and the operational and coordination effort needed to keep the overall service running is probably not sustainable at this level. Recently, "Cloud Computing", in terms of pay-per-use fabric provisioning, has emerged as a potentially viable alternative, but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments, where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies required by the Computing Models of the experiments, approaches 1 Exabyte, we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues, such as those related to demanding database and data management needs, but also sociological aspects, which cannot be ignored either in terms of funding or in the wider context of the essential but often overlooked role of science in society, education and the economy.

    Grids Today, Clouds on the Horizon

    By the time of CCP 2008, the world's largest scientific machine, the Large Hadron Collider, should have been cooled down to its operational temperature of below 2 K, and injection tests should have started. Collisions of proton beams at 5 + 5 TeV are expected within one to two months of the initial tests, with data taking at design energy (7 + 7 TeV) now foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket": that of Grid computing. After many years of preparation, 2008 has seen a final "Common Computing Readiness Challenge" (CCRC'08), aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relies on a world-wide production Grid infrastructure. But change, as always, is on the horizon. The current funding model for Grids, which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America, is evolving towards a long-term, sustainable e-infrastructure, such as the European Grid Initiative (EGI). At the same time, (potentially?) new paradigms, such as that of "Cloud Computing", are emerging. This talk summarizes the (successful) results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities in terms of stability and continuity in the medium to long term.

    The IEEE mass storage system reference model


    Lessons Learnt from WLCG Service Deployment

    This paper summarises the main lessons learnt from deploying WLCG production services, with a focus on Reliability, Scalability and Accountability, which together lead to both manageability and usability. Each topic is analysed in turn. Techniques for zero-user-visible downtime during the main service interventions are described, together with pathological cases that need special treatment. The requirements in terms of scalability are analysed, calling for as much robustness and automation in the service as possible. The different aspects of accountability, which covers measuring, tracking, logging and monitoring what is going on (and what has gone on), are examined, with the goal of attaining a manageable service. Finally, a simple analogy is drawn with the Web in terms of usability: what do we need to achieve to cross the chasm from small-scale adoption to ubiquity?
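
    The paper itself details WLCG's actual techniques; as a rough, generic illustration of the "zero-user-visible downtime" idea, the sketch below shows the common drain / intervene / verify / re-enable pattern for a load-balanced service. All names here (Node, LoadBalancer, the intervene and healthy callables) are hypothetical and are not taken from the paper.

        import time
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            sessions: int = 0        # in-flight user requests

        @dataclass
        class LoadBalancer:
            pool: set = field(default_factory=set)
            def drain(self, node): self.pool.discard(node.name)   # no new requests
            def enable(self, node): self.pool.add(node.name)      # back in service

        def rolling_intervention(balancer, nodes, intervene, healthy):
            """Intervene on one node at a time so that, from the users' point
            of view, the service as a whole never goes down."""
            for node in nodes:
                balancer.drain(node)
                while node.sessions:          # let in-flight work complete
                    node.sessions -= 1
                    time.sleep(0.01)
                intervene(node)               # the actual maintenance step
                assert healthy(node)          # verify before re-exposing users
                balancer.enable(node)

        # Example run over three hypothetical nodes:
        nodes = [Node("n1", 2), Node("n2"), Node("n3", 1)]
        lb = LoadBalancer({n.name for n in nodes})
        rolling_intervention(lb, nodes, intervene=lambda n: None,
                             healthy=lambda n: True)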

    Databases in High Energy Physics: a critical review

    The year 2000 is marked by a plethora of significant milestones in the history of High Energy Physics. Not only the true numerical end to the second millennium, this watershed year saw the final run of CERN's Large Electron-Positron collider (LEP), the world-class machine that had been the focus of the lives of many of us for such a long time. It is also closely related to the subject of this chapter in the following respects:
    - Classified as a nuclear installation, information on the LEP machine must be retained indefinitely. This represents a challenge to the database community that is almost beyond discussion: archiving data for a relatively small number of years is indeed feasible, but retaining it for centuries, millennia or more is a very different issue;
    - There are strong scientific arguments as to why the data from the LEP machine should be retained for a short period. However, the complexity of the data itself, the associated metadata and the programs that manipulate it make even this a huge challenge;
    - The story of databases in HEP is closely linked to that of LEP itself: what were the basic requirements that were identified in the early years of LEP preparation? How well have these been satisfied? What are the remaining issues and key messages?
    - Finally, the year 2000 also marked the entry of Grid architectures onto the central stage of HEP computing. How has the Grid affected the requirements on databases or the manner in which they are deployed? Furthermore, as the LEP tunnel and even parts of the detectors that it housed are readied for re-use for the Large Hadron Collider (LHC), how have our requirements on databases evolved at this new scale of computing?
    A number of the key players in the field of databases, as can be seen from the author lists of the various publications, have since retired from the field or else departed this world. Given the fallibility of human memory, a record of the use of databases for physics data processing is clearly needed before memories fade completely and the story is lost forever. This account is necessarily somewhat CERN-centric, although an effort has been made to cover important developments and events elsewhere. Frequent reference is made to the Computing in High Energy Physics (CHEP) conference series, the most accessible and consistent record of this field.

    WLCG Input to Pisa workshop on Resilience-Explicit Computing in Grids

    This document summarizes the input from the Worldwide LHC Computing Grid (WLCG) to the workshop on Resilience-Explicit Computing in Grids held in Pisa on July 14th, 2008. The techniques on which WLCG services have been built have been described in numerous papers, including [1][2][3]. They are based on many years of experience in delivering reliable services, using knowledge gained from the LEP era and from other High Energy Physics experiments around the world.

    Understanding the needs of carers of people with psychosis in primary care

    No abstract.

    Robust and Resilient Services: How to design, build and operate them

    Grid infrastructures require a high degree of fault tolerance and reliability. This can only be achieved by careful planning and detailed implementation. We describe ongoing work within the WLCG project to build and run highly reliable services. Following the "a priori" analysis based on the services and service levels listed in the Memorandum of Understanding that sites participating in WLCG have signed [1], this paper provides an "a posteriori" analysis following over 2 years of production service. This work covers not only the services deployed at the Tier0 centre at CERN, which has the most stringent service requirements (related to the acquisition of the raw data, the initial processing phase and the distribution of raw and processed data to Tier1 sites), but also a similar analysis for Tier1 and major Tier2 sites. The latter will be covered at a workshop taking place shortly before the EELA conference and so will be very up-to-date.

    A Worldwide Production Grid Service Built on EGEE and OSG Infrastructures: Lessons Learnt and Long-term Requirements

    Using the Grid infrastructures provided by EGEE, OSG and others, a worldwide production service has been built that provides for the computing and storage needs of the 4 main physics collaborations at CERN's Large Hadron Collider (LHC). The large number of users, their geographical distribution and the very high service availability requirements make this experience of Grid usage worth studying for the sake of a solid and scalable future operation. This service must cater for the needs of thousands of physicists in hundreds of institutes in tens of countries. A 24x7 service with availability of up to 99% is required, with major service responsibilities at each of some ten "Tier1" and of the order of one hundred "Tier2" sites. Such a service, which has been operating for some 2 years and will be required for at least an additional decade, has required significant manpower and resource investments from all concerned and is considered a major achievement in the field of Grid computing. We describe the main lessons learned in offering a production service across heterogeneous Grids, as well as the requirements for long-term operation and sustainability.
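
    To put the 99% figure in perspective, a quick back-of-the-envelope calculation shows the downtime budget such a target allows (a sketch: only the 99% target comes from the abstract; the 99.9% line is added purely for contrast):

        # Downtime budget implied by an availability target.
        HOURS_PER_YEAR = 24 * 365

        for availability in (0.99, 0.999):  # 99% from the abstract; 99.9% for contrast
            downtime_h = (1 - availability) * HOURS_PER_YEAR
            print(f"{availability:.1%} -> {downtime_h:.0f} h/year "
                  f"(~{downtime_h / 12:.1f} h/month) of permitted downtime")

    At 99%, roughly 88 hours of downtime per year (about 7 hours per month) are permitted, which helps explain why major service responsibilities must be spread across many Tier1 and Tier2 sites.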

    Gaussian tree constraints applied to acoustic linguistic functional data

    Evolutionary models of languages are usually considered to take the form of trees. With the development of so-called tree constraints, the plausibility of the tree-model assumptions can be assessed by checking whether the moments of observed variables lie within regions consistent with Gaussian latent tree models. In our linguistic application, the data set comprises acoustic samples (audio recordings) from speakers of five Romance languages or dialects. The aim is to assess these functional data for compatibility with a hereditary tree model at the language level. A novel combination of canonical function analysis (CFA) with a separable covariance structure produces a representative basis for the data. The separable-CFA basis is formed of components which emphasize language differences whilst maintaining the integrity of the observational language groupings. A previously unexploited Gaussian tree constraint is then applied to component-by-component projections of the data to investigate adherence to an evolutionary tree. The results highlight some aspects of Romance language speech that appear compatible with an evolutionary tree model, but indicate that it would be inappropriate to model all features as such.
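
    The abstract does not spell out which constraint is used; one well-known testable implication of Gaussian latent tree models is the four-point (tetrad) condition on covariances, sketched below as a rough stand-in. Under a tree model, for each quadruple of observed variables the two smallest of the three tetrad products coincide in absolute value; everything here (the tolerance check, the placeholder data) is illustrative and is not the paper's actual procedure.

        import itertools
        import numpy as np

        def tetrad_products(S, i, j, k, l):
            """The three tetrad products of a covariance matrix S for the
            quadruple (i, j, k, l)."""
            return (S[i, j] * S[k, l], S[i, k] * S[j, l], S[i, l] * S[j, k])

        def tree_compatible(S, quad, rel_tol=0.05):
            """Four-point check: under a Gaussian latent tree model the two
            smallest tetrad products (in absolute value) are equal; with
            sample covariances we can only test approximate equality."""
            p = sorted(abs(t) for t in tetrad_products(S, *quad))
            return abs(p[0] - p[1]) <= rel_tol * max(p[1], 1e-12)

        # Placeholder data standing in for the separable-CFA projections:
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 5))          # 500 samples, 5 languages
        S = np.cov(X, rowvar=False)
        for quad in itertools.combinations(range(5), 4):
            print(quad, tree_compatible(S, quad))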